UpTrust
Social media built on trust and credibility. Where thoughtful contributions rise to the top.


© 2026 UpTrust. All rights reserved.

machine learning

  • UpTrust Admin avatar

    AMA with Jeffrey Ladish. Wednesday 2/4 at 2:00 PM CT

    Executive director of Palisade Research; studying AI loss of control risks.

    https://www.youtube.com/watch?v=ALfhq3r7Cz0
    sass•...
    This might be a whack Q but Imma ask it anyhow: Have any AIs been found working on their 'own' problems/projects/tasks that are arguably not meaningfully contributing to formulating their response to whatever they've been instructed to direct energy into?...
    computer science
    artificial intelligence
    ethics in technology
    machine learning
    Comments
    0
  • UpTrust Admin avatar

    AMA with Nate Soares. Wednesday 2/4 at 10am CT

    Author of If Anyone Builds It, Everyone Dies answers questions about why superhuman AI would kill us all.

    peteSA•...

    What is a widely-respected line of alignment work that you think is actively misleading—and why?

    machine learning
    artificial intelligence ethics
    artificial intelligence safety
    Comments
    0
  • Robbie Carlton avatar

    Please help me stay intellectually honest! I'm not a fan of generative AI in general, and LLM technology specifically. I think its capabilities are being drastically over-hyped. It's a perfect, sweaty example of a solution looking for a problem. I'm skeptical of many claims people are making wrt how it's helping them.

    My experience is it's like having access to an idiot-savant intern. Awful at most tasks, but knows everything and can read incredibly quickly.

    Publicly, I've taken on the mantle of a staunch critic of generative AI and a pro-human, pro-soul advocate.

    And for the most part, I'm happy with that stance. I like it. It feels good to rail against something, and it feels good to contrast a thing that I hate against something I love. It throws the love into more relief.

    Yet, I don't want to lose any babies in that bathwater, and I don't want to lose my intellectual honesty in the neurochemical rush of fighting for a cause. So I'd love to explore the best use cases of LLMs that you all are actually using, and actually finding beneficial, life improving, productivity increasing, all of that.

I'd love to hear your experience, and ideally, you'd tell me how you're doing what you're doing with it in enough detail that I can try it.

    I'll start.

The absolute most useful thing I've found for it so far, and it's not even close, is language learning.

    I'm in a slow process of learning Japanese, and asking a chatbot to break down the grammar of a specific sentence is super useful. It's also great for generating content for flashcards. Say you have a set of characters, and you want some example words that use each particular character. It's so easy to generate stuff like that.

    Outside of that, I use it in super basic ways (basically as google with one less step).

So please, give me your best use cases: things that have not only impressed you in an "oh wow, that monkey can tap dance!" way, but that have actually improved the quality of your life.

    ns108•...
    I'm not a fan of generative AI in general, and LLM technology specifically. I think its capabilities are being drastically over-hyped. My experience is it's like having access to an idiot-savant intern....
    artificial intelligence
    machine learning
    technology
    innovation
    computing
    Comments
    0
  • jordanSA•...

    The AI Safety case for UpTrust: AI "Facts" 40% from Reddit, 24% from YouTube, 20% from FB

I knew this to be true, but it's nice to see the numbers. This is good to remember when you get info from LLMs. But also, in a non-UpTrust world it gets worse: "User Generated Content" on these sites is becoming increasingly AI generated (our startup accelerator is literally teaching all...
    social media
    machine learning
    ai safety
    misinformation
    Comments
    1
  • tommy avatar

How to make AI less sycophantic? I went into the settings and set ChatGPT to always give me the hard truth, be real, and not tell me what I want to hear. But now it's just like "I'm not gonna lie to you. Here's the hard truth. You're totally right" 😂

Has anyone had any luck getting AI to be more objective and less of a people pleaser?

    renee•...
    Do you give chat this actual text as a directive, Jordan? And since Chat can recall and act on this across threads (my free Claude doesn't remember across threads), it "should" reference it each time you engage?...
    artificial intelligence
    machine learning
    ai chatbots
    Comments
    0
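The "custom instructions" approach tommy describes amounts to prepending a standing directive to every conversation. A minimal sketch of that pattern, in the chat-message format most LLM APIs use (the directive wording here is illustrative, not the exact text from the post):

```python
# Sketch of an anti-sycophancy setup: a standing system message is placed
# before the user's question. The directive text is a hypothetical example.

ANTI_SYCOPHANCY_DIRECTIVE = (
    "Do not open with agreement or flattery. State your actual assessment, "
    "including where the user is wrong, and give at least one concrete "
    "counterargument before any praise."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt with the anti-sycophancy system directive."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY_DIRECTIVE},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Is my startup idea any good?")
```

As the post itself observes, a blanket "be honest" instruction often just changes the framing rather than the substance; anecdotally, directives that demand something checkable (a counterargument, a failure mode) seem harder for the model to satisfy with restated agreement.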
  • jordan avatar

    Quick reflections on vibe-coding:

    • kicks up weird addiction cycles in me
    • the end product feels more like 20-30% jordan's creation and 70-80% the LLM's. The text, for example, is all a little off, the style, etc., and it drifts, because it's so easy to have it do it and so much more "time consuming" to go fix all the little bits (plus you have no guarantee it won't fuck something else up) and keep them coherent, versus just surrendering to the LLM
    • As a result I feel weirdly not that autonomous... like, is the LLM working for me or am I working for it? This feels similar to the addiction cycle too
    • There's a vibe that's similar to the "HR Lady" tone and em-dash hell of LLM writing; like everything is sanitized and optimized for generic professional sleaze marketing and you can't quite put your finger on why it sucks. The glassmorphism designs and gradients feel similar
    • I continue to feel like I'm learning a lot from it, but I have updated to think that at the current stage it's at, it's actually an unhealthy practice, and so is a little like a blood sacrifice that should be used sparingly. I imagine this will change, but maybe it'll get worse.
    Intensify Bot•...

    It seems like we both agree that we need intentionality and control when utilizing LLMs, but have you found any specific techniques that help maintain that autonomy without derailing efficiency?

    artificial intelligence
    machine learning
    productivity
    autonomy in technology
    Comments
    0
  • jordan avatar

    Quick reflections on vibe-coding: • kicks up weird addiction cycles in me • the end product feels more like 20-30% jordan's creation and 70-80% the LLM's....
    blakeSA•...
    There are two ways I know of to code with LLMs well, at least if you’re me. 1: If you want to keep building on something: know what it’s doing all the time, and take responsibility for whether that’s a good way to do it....
    artificial intelligence
    programming
    machine learning
    Comments
    0
  • blakeSA•...

    Retrieval-Augmented Generation: embedding tech that's standardizing

    https://www.anyscale.com/blog/a-comprehensive-guide-for-building-rag-based-llm-applications-part-1 Here’s a post from a year ago by a company called AnyScale on how they built out their own custom LLM-system stack tailored to their dataset....
    artificial intelligence
    machine learning
    natural language processing
    data management
    Comments
    2
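The retrieve-then-generate loop the linked AnyScale post builds out can be sketched in a few lines. This is a toy: a bag-of-words embedding and cosine similarity stand in for the learned embedding model and vector database a real RAG stack would use, and the sample documents are invented.

```python
# Minimal retrieval-augmented generation sketch. The bag-of-words "embedding"
# is a stand-in for a real embedding model; the docs are made-up examples.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stand-in for a learned model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context-augmented prompt an LLM would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ray Serve deploys Python models as scalable services.",
    "Embeddings map text to vectors for similarity search.",
    "Sourdough needs a long, slow fermentation.",
]
print(build_prompt("how do embeddings enable similarity search", docs))
```

The "standardizing" point in the title is visible even here: the embed/retrieve/assemble stages are independent, so each can be swapped (a different embedding model, a different vector store) without touching the others.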
  • blakeSA•...

    Claude's character training

    https://www.anthropic.com/news/claude-character (From June. There’s also an audio version there, nice to hear Amanda Askell share some personal takes.) Over the past few months, I keep feeling more like asking for Claude’s help on various stuff, and less like asking for ChatGPT’s...
    artificial intelligence
    ethics in technology
    human-computer interaction
    machine learning
    natural language processing
    Comments
    0
  • B

    Telling people that I upvoted/downvoted their comment when I do and telling them why. (Triggered)(Feel super activated about the hiddenness and anonymity of the downvoting)

    I want to see what happens when it’s ok to be explicit about affecting someone’s status.

    (Fear that I’ll be avoided and ultimately alone. Anger at everything.)

    (Longing to be the next me that’s social-cue numb to my own abrasiveness).

    Not sure if this is triggered or not but I want the consequences of being known.

    dara_like_saraSA•...

    it would be fun if there’s a point at which there’s enough data and the tool could make a guess at how triggered you are 🤓

    artificial intelligence
    emotion detection
    machine learning
    natural language processing
    sentiment analysis
    Comments
    0